I've always wondered why KeyShot's denoising seemed rather aggressive, and today I realized it's using the mesh normals rather than calculating the normals at render time.
You can preview bump maps one material at a time, although that doesn't always seem to work when using the "bump add" node, and it only works on the actual map, not the material node.
Does anyone know if there's a way to render out a normal pass that includes the maps as well? I'm trying to denoise in post, and that pass is really needed for it to work properly.
The render looks something like this:
But I’m after this:
These aren’t my actual images. I just threw a couple textures onto the default rounded cube material to illustrate this.
Is it not a scaling/mapping type issue? What happens if you set the mapping type to box?
No, as far as I can find, KeyShot doesn't give you access to the normals with the bump/normal maps included. The normal render output is based only on the geometry.
I'm not sure if KeyShot is doing something "behind the scenes" that accomplishes that, but if you plan on denoising elsewhere with OptiX or OIDN (Open Image Denoise), then the bump/normal maps need to be included. I was just trying to illustrate that with those images.
I'm trying to figure out if someone has found a way, in KeyShot, to output normals that include the maps.
Ah, you mean the render pass; I didn't catch that. I wonder if it's supposed to leave the normal textures out. I'm not at my PC currently, but I'll test some later.
I believe KeyShot uses OptiX 2.3.3, which is from 2010. Denoising works with my 4070, but not with the machine I built for rendering (and not just in KeyShot), where I landed on a 5070 Ti. I went with that because moving to a 3070 or 4070 wouldn't have saved me any money at the time, so I figured I might as well get some of the built-in AI benefits.
So I’m trying to figure out a way to output just the normals. The ideal situation would be to not have to do a full render either.
I tested out denoising my existing renders in the Blender compositor and it does a surprisingly good job, but it falls apart for things like anisotropic materials. The compositor there uses OIDN (Open Image Denoise), not OptiX. I've got the texture visible in the albedo/diffuse, which helps keep a lot of the detail since it matches the same shape, but highlights that fall over bumps and small textures get a bit muddy. The normal I'm feeding it is just the surface data, so the denoiser doesn't have any instruction for the small surface deformation.
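In case it's useful, here's roughly how I wire it up in the compositor via Python. This is a minimal sketch under my own assumptions: the file paths are placeholders, and you'd swap in however you actually save your KeyShot passes.

```python
# Sketch: wire KeyShot passes into Blender's compositor Denoise (OIDN) node.
# The file paths are placeholders; load whatever beauty/diffuse/normal passes you saved.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

def image_node(path):
    node = tree.nodes.new("CompositorNodeImage")
    node.image = bpy.data.images.load(path)
    return node

noisy  = image_node("/renders/shot_beauty.exr")   # noisy render
albedo = image_node("/renders/shot_diffuse.exr")  # diffuse/albedo pass
normal = image_node("/renders/shot_normal.exr")   # geometry-only normal pass

denoise = tree.nodes.new("CompositorNodeDenoise")
comp = tree.nodes.new("CompositorNodeComposite")

tree.links.new(noisy.outputs["Image"],  denoise.inputs["Image"])
tree.links.new(albedo.outputs["Image"], denoise.inputs["Albedo"])
tree.links.new(normal.outputs["Image"], denoise.inputs["Normal"])
tree.links.new(denoise.outputs["Image"], comp.inputs["Image"])
```

That Normal input is exactly where the geometry-only pass hurts.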
For depth maps, you can apply a material to the whole scene and control the start and end points yourself, rather than relying on a floating-point image that covers whatever range the data happens to be in and having users normalize it later. I was hoping there was a trick like that for normals. I can preview them, so KeyShot is capable of displaying them; I think we just don't have access to it. I think I'm just out of luck.
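For reference, the "normalize it later" step I mean is just a remap like this. It's a sketch only: the near/far values are placeholders you'd pick per scene, and loading the EXR is up to whatever library you prefer.

```python
import numpy as np

def normalize_depth(depth: np.ndarray, near: float, far: float) -> np.ndarray:
    """Remap a raw scene-unit depth pass to 0-1 between chosen near/far planes."""
    return np.clip((depth - near) / (far - near), 0.0, 1.0)

# Example (placeholder values):
# normalized = normalize_depth(depth, near=10.0, far=250.0)
```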
That's actually an interesting use of a normal map. I've sometimes tried other denoiser options because the one in KS is still pretty aggressive, although it's gotten a lot better.
In V-Ray, I know you can pick V-Ray, Intel, or NVIDIA as the denoiser, and in practice they all have their pros and cons.
It's a bit of a side step, but I've asked for the option to preview the entire real-time view as normal/rough/diffuse/metallic/occlusion/etc. That data is already available on the GPU. I use those view modes a lot in Substance Painter to get a good overview of, for example, the roughness values across objects; you can immediately see if you haven't set a roughness at all. Same for comparing normals, to check that they're all at the right strength relative to each other.
If you could also render those views out, it would solve your normal-map issue too, I think. I don't think it would be too hard to implement either, and it would be a great help for fine-tuning materials.
Yeah, that would be quite useful. I would like to be able to view that as well. The same goes for render layers: sometimes I have them set up, but then the render comes out with a part missing for whatever reason. It would be nice to know something isn't working as planned before the render is completely finished, because you never get a preview of that.
I've had similar needs to what you describe. I've had to preview one material, take a screengrab, paste it into PureRef (a nice little piece of software), then switch to another material so I can compare values. If that's all I'm looking at, a dedicated view mode would definitely be more useful.
It's pretty nuts how few samples you can get away with when you've got a good denoiser using the normal, diffuse, and noisy image. And yeah, some are much better than others at DOF, transparency, and things like that.
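If anyone wants to try that combination outside Blender, Intel's standalone oidnDenoise example app takes the same three inputs. Here's a rough sketch of how I'd call it from Python; the paths are placeholders, and the flags are the ones I remember from the OIDN example app, so check them against your build's --help.

```python
# Call Intel's oidnDenoise example app with beauty + albedo + normal inputs.
# Paths are placeholders; flags (--hdr/--alb/--nrm/-o) as I recall them from the
# OIDN example app, so verify against your build.
import subprocess

subprocess.run(
    [
        "oidnDenoise",
        "--hdr", "/renders/shot_beauty.pfm",
        "--alb", "/renders/shot_albedo.pfm",
        "--nrm", "/renders/shot_normal.pfm",
        "-o",    "/renders/shot_denoised.pfm",
    ],
    check=True,
)
```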
PureRef looks like a smart tool! It's been a while since I looked, but I'd love a local tool that automatically adds keywords to the images I've stored, or even to 3D models. It can't be too hard with AI, but a lot of those tools run online in the cloud, and I don't want to be in the cloud.
Do you use any other external denoising tools besides Blender's compositor? I'd love to try a few. I must say KeyShot often works well enough for my usage, but sometimes small imperfections I put a lot of time into are just gone.
I'd also love for KS to be able to use a noise threshold to stop rendering instead of the time/sample-based trigger. That makes it much easier to keep a consistent quality throughout an animation, and if you only need a rough pass you simply raise the allowed noise level. It also saves time and the puzzle of figuring out which frame is the most complex and how many samples/how much time it needs.
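Just to spell out what I mean by that stop condition, conceptually it's something like this. A toy sketch only, nothing to do with how KeyShot works internally; render_batch and estimate_noise are hypothetical stand-ins.

```python
# Toy sketch of a noise-threshold stop condition: keep adding samples until the
# estimated noise drops below a target, instead of stopping at a fixed count/time.
# render_batch() and estimate_noise() are hypothetical stand-ins, not a real API.
def render_frame(frame, noise_target=0.01, batch=32, max_samples=4096):
    samples = 0
    image = None
    while samples < max_samples:
        image = render_batch(frame, samples, batch)  # hypothetical: accumulate `batch` more samples
        samples += batch
        if estimate_noise(image) <= noise_target:    # hypothetical noise metric (e.g. pixel variance)
            break
    return image, samples
```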
I want to add that I was given bad information earlier. I'm sure KeyShot is using a newer version of OptiX than I thought, although which denoiser you're actually getting seems to be unclear.
I tried using another denoiser tool, but Blender seemed to be the easiest path.
For your tagging question… sort of. There are local processors for sure. I've used QuMagie (QNAP software built into a NAS) that processed images locally with a few very general keywords, and it also matched faces.
From what I found, you'll need a system set up to handle sidecar files, because too many image formats don't accept tags directly. I haven't found anything great for that yet, but it's not at the top of my list.
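The sidecar convention I mean is just an XMP file next to the image carrying dc:subject keywords, which most DAM/photo tools can read. A minimal sketch of writing one; the keyword list and paths are placeholders.

```python
# Write a minimal XMP sidecar with dc:subject keywords next to an image.
# Sketch only; paths and keywords are placeholders.
from pathlib import Path

XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject><rdf:Bag>{items}</rdf:Bag></dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

def write_sidecar(image_path: str, keywords: list[str]) -> None:
    items = "".join(f"<rdf:li>{k}</rdf:li>" for k in keywords)
    Path(image_path).with_suffix(".xmp").write_text(XMP_TEMPLATE.format(items=items))

write_sidecar("/renders/shot_beauty.png", ["keyshot", "product", "render"])
```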
And yes, KeyShot might be able to produce images faster by using a noise threshold. I'd also love to see a separate noise threshold for the alpha channel to help handle DOF with transparency. Maybe that's down the road, though.
Progressive samples are predictable. A noise threshold can be really challenging if you have fireflies you aren't noticing.
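To make the firefly point concrete, here's a toy numpy example of how a handful of blown-out pixels can wreck a variance-based noise estimate, and how clamping (a common mitigation) pulls it back. Nothing renderer-specific; the clamp value is arbitrary.

```python
# Toy example: 20 firefly pixels dominate a variance-based noise estimate.
# Clamping extreme values (a common mitigation) restores a sane estimate.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(0.5, 0.02, size=(512, 512))                     # mostly converged pixels
frame[rng.integers(0, 512, 20), rng.integers(0, 512, 20)] = 50.0   # a few fireflies

print("raw std:    ", frame.std())                 # blown up by the fireflies
print("clamped std:", np.clip(frame, 0, 2).std())  # close to the real noise level
```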